Computational and Analytical Tools for Resilient and Secure Power Grids
Enhancing power grids' performance and resilience has been one of the greatest challenges in engineering and science over the past decade. A recent report by the National Academies of Sciences, Engineering, and Medicine, along with other studies, emphasizes the necessity of deploying new ideas and mathematical tools to address the challenges facing power grids now and in the future. To fulfill this need, numerous grid modernization programs have been initiated in recent years. This thesis focuses on one of the most critical challenges facing power grids: their vulnerability to failures and attacks. Our approach bridges concepts in power engineering and computer science to improve power grids' resilience and security. We analyze the vulnerability of power grids to cyber and physical attacks and failures, design efficient monitoring schemes for robust state estimation, develop algorithms to control the grid under stress, and introduce methods to generate realistic power grid test cases. Our contributions can be divided into four major parts:
Power Grid State Prediction: Large-scale power outages in Australia (2016), Ukraine (2015), Turkey (2015), India (2013), and the U.S. (2011, 2003) have demonstrated the vulnerability of power grids to cyber and physical attacks and failures. Power grid outages have devastating effects on almost every aspect of modern life as well as on interdependent systems. Although failures are inevitable, their effects on the grid's performance can be limited if the system operator can predict and understand the consequences of an initial failure and can immediately detect problematic failures. To enable these capabilities, we study failures in power grids using computational and analytical tools based on the DC power flow model. We introduce new metrics to efficiently evaluate the severity of an initial failure and develop efficient algorithms to predict its consequences. We further identify power grids' vulnerabilities using these metrics and algorithms.
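The DC power flow computation at the heart of such prediction can be sketched in a few lines: solve theta = B^+ p, where B^+ is the Moore-Penrose pseudo-inverse of the admittance (Laplacian) matrix, read off the line flows, then re-solve after removing a line to see how flows redistribute. The 4-bus network, unit susceptances, and injections below are made up for illustration; this is a minimal sketch, not the thesis's prediction algorithms:

```python
import numpy as np

def dc_flows(lines, susceptances, injections, n):
    """DC power flow: theta = B^+ p, then f_ij = b_ij * (theta_i - theta_j)."""
    B = np.zeros((n, n))
    for (i, j), b in zip(lines, susceptances):
        B[i, i] += b; B[j, j] += b
        B[i, j] -= b; B[j, i] -= b
    theta = np.linalg.pinv(B) @ injections  # Moore-Penrose pseudo-inverse
    return np.array([b * (theta[i] - theta[j])
                     for (i, j), b in zip(lines, susceptances)])

# Toy 4-bus grid: bus 0 generates 1 unit, bus 3 consumes it,
# with two parallel paths 0-1-3 and 0-2-3 of equal susceptance.
lines = [(0, 1), (1, 3), (0, 2), (2, 3)]
b = [1.0, 1.0, 1.0, 1.0]
p = np.array([1.0, 0.0, 0.0, -1.0])
f0 = dc_flows(lines, b, p, 4)

# Predict the consequence of an initial failure of line (0, 1):
# remove it and re-solve on the surviving network.
f1 = dc_flows(lines[1:], b[1:], p, 4)
```

On this toy grid each parallel path carries half the demand; removing line (0, 1) shifts the full unit of flow onto the surviving path 0-2-3. This redistribution after an initial failure is the kind of effect the severity metrics are designed to capture.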
Power Grid State Estimation: To accurately predict the subsequent effects of an initial failure on the performance of the grid, the system operator needs to know exactly when and where the initial failure happened. However, due to a lack of measurement devices or a cyber attack on the grid, such information may not be directly available to the grid operator via measurements. To address this problem, we develop efficient methods to estimate the state of the grid and detect failures (if any) from the partial information available.
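A classical building block for this kind of estimation is least-squares DC state estimation from a subset of flow measurements, with the residual acting as a consistency check. The 3-bus setup below is invented for illustration and is not the thesis's method:

```python
import numpy as np

def estimate_state(H, z):
    """Least-squares estimate of phase angles from partial flow measurements,
    plus the measurement residual used for anomaly detection."""
    theta_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    residual = np.linalg.norm(H @ theta_hat - z)
    return theta_hat, residual

# 3 buses, bus 0 taken as reference (angle 0), all susceptances 1.
# Rows of H encode measured flows f_ij = b * (theta_i - theta_j):
H = np.array([[-1.0,  0.0],   # f_01 = -theta_1
              [ 1.0, -1.0],   # f_12 = theta_1 - theta_2
              [ 0.0, -1.0]])  # f_02 = -theta_2
true_theta = np.array([-0.2, -0.5])
z = H @ true_theta            # noiseless measurements for the sketch
theta_hat, r = estimate_state(H, z)
# A large residual would flag inconsistent measurements, e.g. an
# unreported line failure or injected false data.
```

With consistent measurements the residual is (numerically) zero; a failure or attack that changes the true topology makes z inconsistent with H, inflating the residual.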
Power Grid Control: Once an initial failure is detected, prediction methods can be used to anticipate its subsequent effects. If the initial failure causes a cascade of failures in the grid, a control mechanism needs to be applied to mitigate its further effects. Power grid islanding is an effective method to mitigate cascading failures. The challenge is to partition the network into smaller connected components, called islands, so that each island can operate independently for a short period of time. This prevents the system from separating into unbalanced parts due to cascading failures. To address this problem, we introduce and study the Doubly Balanced Connected graph Partitioning (DBCP) problem and provide an efficient algorithm to partition the power grid into two operating islands.
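The islanding objective can be made concrete with an exhaustive search on a tiny graph: find a 2-way split where both sides are connected and each side's generation/load imbalance is minimized. This brute force is only to illustrate the objective; the thesis provides an efficient algorithm, and the 4-bus example is made up:

```python
from itertools import combinations

def islands_brute_force(nodes, edges, power):
    """Exhaustive search for a 2-island split where both sides are connected
    and the total generation/load imbalance across islands is minimized."""
    def connected(part):
        start = next(iter(part))
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for a, b2 in edges:
                for v in ((b2,) if a == u else (a,) if b2 == u else ()):
                    if v in part and v not in seen:
                        seen.add(v); stack.append(v)
        return seen == part
    best, best_cost = None, float("inf")
    for k in range(1, len(nodes)):
        for side in combinations(nodes, k):
            a, b2 = set(side), set(nodes) - set(side)
            if connected(a) and connected(b2):
                cost = (abs(sum(power[v] for v in a))
                        + abs(sum(power[v] for v in b2)))
                if cost < best_cost:
                    best, best_cost = (a, b2), cost
    return best, best_cost

# 4-bus path 0-1-2-3; power > 0 is generation, < 0 is load
nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3)]
power = {0: 1.0, 1: -1.0, 2: 1.0, 3: -1.0}
(best_a, best_b), cost = islands_brute_force(nodes, edges, power)
```

Here the split {0, 1} vs {2, 3} is optimal: both islands are connected and each balances its own generation against its load (cost 0). DBCP additionally asks for balance in island sizes, which this sketch does not enforce.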
Power Grid Test Cases for Evaluation: To evaluate algorithms developed for enhancing power grids' resilience, one needs to study their performance on real grid data. However, for security reasons, such data sets are not publicly available and are very hard to obtain. Therefore, we study the structural properties of the U.S. Western Interconnection grid (WI) and, based on the results, present the Network Imitating Method Based on LEarning (NIMBLE) for generating synthetic spatially embedded networks with properties similar to a given grid. We apply NIMBLE to the WI and show that the generated network has similar structural and spatial properties as well as the same level of robustness to cascading failures.
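As a generic illustration of what "spatially embedded synthetic network" means (this is a classic Waxman-style baseline, not NIMBLE, whose parameters are learned from a real grid; the `alpha`/`beta` values are arbitrary):

```python
import math
import random

def spatial_network(n, alpha=0.4, beta=0.15, seed=0):
    """Waxman-style spatially embedded random graph: place nodes uniformly
    in the unit square and connect pairs with probability decaying
    exponentially in their Euclidean distance."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(pos[i], pos[j])
            # nearby nodes are far more likely to be linked, as in real grids
            if rng.random() < alpha * math.exp(-d / beta):
                edges.append((i, j))
    return pos, edges

pos, edges = spatial_network(50)
```

Such models capture the distance bias of grid topologies but not their degree distributions or robustness properties, which is the gap a learned generator like NIMBLE is meant to close.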
Overall, the results provided in this thesis advance power grids' resilience and security by providing a better understanding of the system and by developing efficient algorithms to protect it at the time of failure.
Cascading Failures in Power Grids - Analysis and Algorithms
This paper focuses on cascading line failures in the transmission system of
the power grid. Recent large-scale power outages demonstrated the limitations
of percolation- and epidemic-based tools in modeling cascades. Hence, we
study cascades by using computational tools and a linearized power flow model.
We first obtain results regarding the Moore-Penrose pseudo-inverse of the power
grid admittance matrix. Based on these results, we study the impact of a single
line failure on the flows on other lines. We also illustrate via simulation the
impact of the distance and resistance distance on the flow increase following a
failure, and discuss the difference from the epidemic models. We then study the
cascade properties, considering metrics such as the distance between failures
and the fraction of demand (load) satisfied after the cascade (yield). We use
the pseudo-inverse of the admittance matrix to develop an efficient algorithm to
identify the cascading failure evolution, which can be a building block for
cascade mitigation. Finally, we show that finding the set of lines whose
removal has the most significant impact (under various metrics) is NP-Hard and
introduce a simple heuristic for the minimum yield problem. Overall, the
results demonstrate that using the resistance distance and the pseudo-inverse
of the admittance matrix provides important insights and can support the
development of efficient algorithms.
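The role of the pseudo-inverse can be made concrete: the resistance distance between buses i and j is read directly off the pseudo-inverse B^+ of the admittance matrix as R_ij = B^+_ii + B^+_jj - 2 B^+_ij. A minimal sketch, with a made-up 3-bus triangle of unit-susceptance lines:

```python
import numpy as np

def resistance_distance(B):
    """All-pairs resistance distance from the Moore-Penrose pseudo-inverse
    of the admittance (Laplacian) matrix B:
    R_ij = B+_ii + B+_jj - 2 * B+_ij."""
    Bp = np.linalg.pinv(B)
    d = np.diag(Bp)
    return d[:, None] + d[None, :] - 2 * Bp

# Triangle of three buses joined by unit-susceptance lines
B = np.array([[ 2., -1., -1.],
              [-1.,  2., -1.],
              [-1., -1.,  2.]])
R = resistance_distance(B)
```

For the triangle, each pair sees a direct unit line in parallel with a two-line path, giving R_ij = 2/3, smaller than the unit graph distance. This is the sense in which resistance distance, unlike hop distance, reflects how flow actually divides over parallel paths after a failure.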
LINGUIST: Language Model Instruction Tuning to Generate Annotated Utterances for Intent Classification and Slot Tagging
We present LINGUIST, a method for generating annotated data for Intent
Classification and Slot Tagging (IC+ST), via fine-tuning AlexaTM 5B, a
5-billion-parameter multilingual sequence-to-sequence (seq2seq) model, on a
flexible instruction prompt. In a 10-shot novel intent setting for the SNIPS
dataset, LINGUIST surpasses state-of-the-art approaches (Back-Translation and
Example Extrapolation) by a wide margin, showing absolute improvement for the
target intents of +1.9 points on IC Recall and +2.5 points on ST F1 Score. In
the zero-shot cross-lingual setting of the mATIS++ dataset, LINGUIST
outperforms a strong baseline of Machine Translation with Slot Alignment by
+4.14 points absolute on ST F1 Score across 6 languages, while matching
performance on IC. Finally, we verify our results on an internal large-scale
multilingual dataset for conversational agent IC+ST and show significant
improvements over a baseline which uses Back-Translation, Paraphrasing and Slot
Catalog Resampling. To our knowledge, we are the first to demonstrate
instruction fine-tuning of a large-scale seq2seq model to control the outputs
of multilingual intent- and slot-labeled data generation.Comment: Accepted to The 29th International Conference on Computational
Linguistics (COLING 2022) October 12-17, 2022, Gyeongju, Republic of Korea
https://coling2022.org
AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model
In this work, we demonstrate that multilingual large-scale
sequence-to-sequence (seq2seq) models, pre-trained on a mixture of denoising
and Causal Language Modeling (CLM) tasks, are more efficient few-shot learners
than decoder-only models on various tasks. In particular, we train a 20 billion
parameter multilingual seq2seq model called Alexa Teacher Model (AlexaTM 20B)
and show that it achieves state-of-the-art (SOTA) performance on 1-shot
summarization tasks, outperforming a much larger 540B PaLM decoder model.
AlexaTM 20B also achieves SOTA in 1-shot machine translation, especially for
low-resource languages, across almost all language pairs supported by the model
(Arabic, English, French, German, Hindi, Italian, Japanese, Marathi,
Portuguese, Spanish, Tamil, and Telugu) on the Flores-101 dataset. We also show
that in the zero-shot setting, AlexaTM 20B outperforms GPT3 (175B) on SuperGLUE
and SQuADv2
datasets and provides SOTA performance on multilingual tasks such as XNLI,
XCOPA, Paws-X, and XWinograd. Overall, our results present a compelling case
for seq2seq models as a powerful alternative to decoder-only models for
large-scale language model (LLM) training.